FME Server adds an extra dimension to FME performance: scalability. The most easily scaled component is the number of FME Engines.
Increasing the number of engines supports a higher volume of jobs, and the FME Server Core contains a Software Load Balancer (SLB) to distribute jobs to the FME engines in a balanced way.
By default, multiple engines only help when there are multiple jobs to run. If you have a single workspace and want FME Server to process it more efficiently, you need to divide that workspace's work into multiple jobs.
To do so, I can create a parent workspace that divides my source data into separate parts and sends each to a different worker job using the FMEServerJobSubmitter transformer.
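Outside of Workbench, the same pattern can be scripted against FME Server's REST API, whose v3 "transformations/submit" endpoint accepts a job with published parameters. The sketch below only builds the HTTP request; the host, token, repository, workspace, and parameter names (XMIN, YMIN, XMAX, YMAX) are placeholders for illustration, not values from this lesson. Inside FME itself, the FMEServerJobSubmitter transformer handles this for you.

```python
import json
from urllib.request import Request

def build_job_request(host, token, repository, workspace, bounds):
    """Build a POST request that submits one worker job to FME Server,
    passing the tile bounds as published parameters.
    (Parameter names and the target workspace are hypothetical.)"""
    url = (f"https://{host}/fmerest/v3/transformations/submit/"
           f"{repository}/{workspace}")
    body = {
        "publishedParameters": [
            {"name": "XMIN", "value": str(bounds[0])},
            {"name": "YMIN", "value": str(bounds[1])},
            {"name": "XMAX", "value": str(bounds[2])},
            {"name": "YMAX", "value": str(bounds[3])},
        ]
    }
    return Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Authorization": f"fmetoken token={token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# One request per tile; sending them with urllib.request.urlopen (omitted
# here) would queue one job per tile on the server.
req = build_job_request("myserver.example.com", "abc123",
                        "Samples", "tiler.fmw", (0, 0, 50, 50))
```

A parent process that loops over the tiles and submits one such request per tile achieves the same fan-out as the FMEServerJobSubmitter approach described above.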
For example, I can calculate the bounds of tiles to be created and share the load over multiple server engines by running the workspace separately for each tile.
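The tile-bounds calculation itself is simple to sketch. The extent values and the 2 × 2 grid below are made-up examples; in the parent workspace each resulting bounds tuple would become one feature routed to the FMEServerJobSubmitter.

```python
def tile_bounds(xmin, ymin, xmax, ymax, nx, ny):
    """Yield (xmin, ymin, xmax, ymax) for an nx-by-ny grid of tiles
    covering the overall extent."""
    dx = (xmax - xmin) / nx
    dy = (ymax - ymin) / ny
    for i in range(nx):
        for j in range(ny):
            yield (xmin + i * dx, ymin + j * dy,
                   xmin + (i + 1) * dx, ymin + (j + 1) * dy)

# Split a 100 x 100 extent into a 2 x 2 grid of 50 x 50 tiles.
tiles = list(tile_bounds(0, 0, 100, 100, 2, 2))
```

With four engines available, each of the four tiles can then run as its own job in parallel.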
New in FME 2020.0, FME Server supports a usage-based model for engine pricing. CPU-usage pricing lets you spin up multiple engines in response to bursty workloads, paying for CPU time rather than per engine. For use cases where a large amount of data must be processed at once, but not frequently, CPU-usage pricing can be attractive from both a performance and a cost perspective. Learn more by reading this blog post.
FME Cloud is a hosted deployment option for FME Server. The benefit is that you don't have to purchase FME Server; you simply make use of it whenever you have a job that can take advantage of its power. You can create an instance with as many engines as you want, keeping in mind that the hardware you select will have an impact on performance.
The key to automating this for performance benefits is the set of FME Cloud custom transformers available on the FME Store:
With the FMECloudInstanceLauncher transformer (or the FMECloudInstanceController), I can run my parent workspace, have it automatically start an FME Cloud instance, and run one or more worker jobs on it.
This way I can start a new instance for each job, or run several jobs on one instance, depending on the type of instance and how many engines it has running on it.